**Final Project Report**
    Lejie LIU (f007gkf)

(#) Motivational image
    The theme "Reflections on Time" reminds me
     of the old-fashioned clock, which I created for my CS22 3D Modeling course,
      a nearly burnt-out candle,
     and some small objects that used to be displayed in my
     childhood home.
     
Clock Model I Made in CS22
(#) Implemented Features
    1. Extra Emitters: Point Light, Directional Light, and Spot Light
    2. Parallelized Rendering with Nanothread
    3. Moderate BSDF: Rough Conductor Material
    4. Adaptive Sampling with Denoising
        4.1 Implement a basic NL-means denoising function
        4.2 Implement a joint NL-means denoising function with normal and albedo feature buffers
        4.3 Adaptive sampling with pixel variance estimates
        4.4 Integrate Intel's Open Image Denoise
    5. Homogeneous Scattering Participating Media

(##) 1. Extra Emitters
The following lights were implemented, modeled after Blender's light types:
    I implemented each light as a child of the Surface class, which makes them compatible
    with other area lights made of simple geometries or meshes when sampling lights.

    1. Directional Light
    Code: src/surfaces/directional_light.cpp
    Parameters:
        + Angle: this determines the angular size of the light source.
        + Irradiance
        + Transform
    Validation:
    (The angle mainly controls the softness of shadows)
    
Directional Light angle = 0 | Ref from Blender | Directional Light angle = 10 | Ref from Blender
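For reference, the effect of the angle parameter can be sketched as uniform sampling of a direction inside a cone of the given half-angle around the light direction, which is what softens the shadows. This is an illustrative sketch, not the actual darts interface; `sample_cone` and `Vec3` are hypothetical names.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(const Vec3 &v) {
    double n = std::sqrt(dot(v, v));
    return {v.x / n, v.y / n, v.z / n};
}

// Uniformly sample a direction inside a cone of half-angle `angle` (radians)
// around `axis`; u1, u2 are uniform random numbers in [0,1).
// An angle of 0 reproduces a perfectly sharp directional light.
Vec3 sample_cone(const Vec3 &axis, double angle, double u1, double u2) {
    const double PI = 3.14159265358979323846;
    double cos_t = 1.0 - u1 * (1.0 - std::cos(angle));
    double sin_t = std::sqrt(std::max(0.0, 1.0 - cos_t * cos_t));
    double phi = 2.0 * PI * u2;
    // Build an orthonormal basis (t, b, axis) around the cone axis.
    Vec3 up = std::fabs(axis.z) < 0.9 ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
    Vec3 t = normalize(cross(up, axis));
    Vec3 b = cross(axis, t);
    return {t.x * sin_t * std::cos(phi) + b.x * sin_t * std::sin(phi) + axis.x * cos_t,
            t.y * sin_t * std::cos(phi) + b.y * sin_t * std::sin(phi) + axis.y * cos_t,
            t.z * sin_t * std::cos(phi) + b.z * sin_t * std::sin(phi) + axis.z * cos_t};
}
```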
    2. Point Light
    Code: src/surfaces/point_light.cpp
    Parameters:
        + Radius: determines the sampling size of the light source and affects the softness of shadows.
        + Power
        + Transform
    Validation:
Point Light radius = 0 | Point Light radius = 2
    3. Spot Light
    Code: src/surfaces/spot_light.cpp
    Parameters:
        + Radius: same as the point light
        + Power
        + Cutoff angle
        + Cutoff blur
    Spot lights are based on point lights, with an additional attenuation term computed from the outgoing light direction.
Spot Light radius = 1, cutoff blur = 0.15 | Spot Light radius = 2, cutoff blur = 0.5
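The cutoff/blur attenuation can be sketched as below: weight 1 inside the inner cone, 0 outside the cutoff, and a smoothstep in between. This is a hedged illustration rather than the actual spot_light.cpp code; `spot_attenuation` is a hypothetical name.

```cpp
#include <cassert>
#include <cmath>

// Attenuation for a spot light. cos_dir is the cosine of the angle between
// the spot axis and the outgoing light direction; cutoff is the outer cone
// half-angle in radians; blur in [0,1] controls the width of the soft edge.
double spot_attenuation(double cos_dir, double cutoff, double blur) {
    double cos_outer = std::cos(cutoff);
    double cos_inner = std::cos(cutoff * (1.0 - blur));
    if (cos_dir <= cos_outer) return 0.0;  // outside the cone
    if (cos_dir >= cos_inner) return 1.0;  // inside the fully lit inner cone
    double t = (cos_dir - cos_outer) / (cos_inner - cos_outer);
    return t * t * (3.0 - 2.0 * t);        // smoothstep across the blurred edge
}
```

With blur = 0 the inner and outer cones coincide and the edge is hard, matching the sharper falloff seen in the low-blur render above.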
(##) 2. Parallelized Rendering with Nanothread
    Code: src/scene.cpp

My code distributes the rendering work across threads on a per-pixel basis, using the Nanothread library.
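The work split can be illustrated with the sketch below. Note this uses std::thread purely for illustration, whereas the actual renderer uses Nanothread's parallel loop; `parallel_render` is a hypothetical helper.

```cpp
#include <algorithm>
#include <atomic>
#include <cassert>
#include <functional>
#include <thread>
#include <vector>

// Illustrative only: each thread renders a disjoint, interleaved set of
// image rows, so no pixel is rendered twice and no locking is needed.
void parallel_render(int width, int height,
                     const std::function<void(int, int)> &render_pixel) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t)
        workers.emplace_back([=, &render_pixel] {
            for (int y = (int)t; y < height; y += (int)n)  // interleaved rows
                for (int x = 0; x < width; ++x)
                    render_pixel(x, y);
        });
    for (auto &w : workers) w.join();
}
```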

Render time without multi-threading:
```
Will save rendered image to "../../scenes/final_project/jensen_box_diffuse_mis-2024-11-25-19-06-37.png"
Scene Rendering │█████████████████████████████████│ (2.442s)
```

Render time with multi-threading:
```
Will save rendered image to "../../scenes/final_project/jensen_box_diffuse_mis-2024-11-25-19-13-09.png"
Scene Rendering │█████████████████████████████████│ (544ms)
```

Using multi-threading greatly reduces the rendering time (from 2.442 s down to 544 ms here, about a 4.5x speedup) while the result remains nearly identical.
With multi-threading | Without multi-threading
(##) 3. Rough Conductor Material
    Code: src/materials/conductor.cpp
    I implemented a conductor material with roughness control, where:
    + eta and k are used to compute the conductor Fresnel term.
    + The GGX normal distribution function models the surface microstructure.
    + The Smith geometry term accounts for shadowing and masking effects.
    + My version does not support anisotropic roughness.
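The three ingredients above can be sketched as follows, in single-channel form. This is a hedged illustration using the common (eta, k) Fresnel approximation, the isotropic GGX distribution, and the Smith G1 term, not the actual conductor.cpp code.

```cpp
#include <cassert>
#include <cmath>

static const double PI = 3.14159265358979323846;

// Unpolarized Fresnel reflectance for a conductor (one channel), using the
// common approximation in terms of eta (refractive index) and k (extinction).
double fresnel_conductor(double cos_i, double eta, double k) {
    double c2 = cos_i * cos_i;
    double ek = eta * eta + k * k;
    double rs = (ek - 2.0 * eta * cos_i + c2) / (ek + 2.0 * eta * cos_i + c2);
    double rp = (ek * c2 - 2.0 * eta * cos_i + 1.0) / (ek * c2 + 2.0 * eta * cos_i + 1.0);
    return 0.5 * (rs + rp);
}

// Isotropic GGX (Trowbridge-Reitz) normal distribution; cos_h = n . h.
double ggx_D(double cos_h, double alpha) {
    double a2 = alpha * alpha;
    double d = cos_h * cos_h * (a2 - 1.0) + 1.0;
    return a2 / (PI * d * d);
}

// Smith masking term G1 for GGX; cos_v = n . v.
double smith_G1(double cos_v, double alpha) {
    double a2 = alpha * alpha;
    return 2.0 * cos_v / (cos_v + std::sqrt(a2 + (1.0 - a2) * cos_v * cos_v));
}
```

At grazing angles the conductor Fresnel term tends to 1 for any (eta, k), which produces the bright silhouette edges visible in the metal renders.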

Rendering results for various conductor materials:

+ Copper ("eta": [0.200438, 0.924033, 1.10221], "k": [3.91295, 2.45285, 2.14219])
Copper
+ Gold ("eta": [0.143119, 0.374957, 1.44248], "k": [3.98316, 2.38572, 1.60322])
Gold
+ Stainless ("eta": [1.65746, 0.880369, 0.521229], "k": [9.22387, 6.26952, 4.837])
Stainless
Rendering results with different roughness:
Gold 0.01 | Gold 0.05 | Gold 0.20 | Gold 0.80
(##) 4. Adaptive Sampling with Denoising
    4.1 Implement a basic NL-means denoising function
        Code: src/scene.cpp/denoise_nlm()
        I implemented a basic Non-Local Means (NL-means) denoising function. It leverages self-similarity
        in the image using a patch-based approach.
        Parameters:
            + Search Radius: the size of the region around p from which candidate pixels q are selected.
                            This balances computational cost against accuracy.
            + Patch Radius: controls the size of the patch P(p); larger patches improve robustness but
                            blur small details.
            + Filtering Strength
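The core of NL-means, the patch distance and the exponential weight it feeds, can be sketched as below for a grayscale image with clamped borders. Function names are illustrative, not those used in denoise_nlm().

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Mean squared difference between the patches centered at p and q in a
// row-major grayscale image; out-of-bounds pixels are clamped to the edge.
double patch_distance(const std::vector<double> &img, int w, int h,
                      int px, int py, int qx, int qy, int patch_radius) {
    auto at = [&](int x, int y) {
        x = std::min(std::max(x, 0), w - 1);
        y = std::min(std::max(y, 0), h - 1);
        return img[y * w + x];
    };
    double sum = 0.0;
    int n = 0;
    for (int dy = -patch_radius; dy <= patch_radius; ++dy)
        for (int dx = -patch_radius; dx <= patch_radius; ++dx) {
            double diff = at(px + dx, py + dy) - at(qx + dx, qy + dy);
            sum += diff * diff;
            ++n;
        }
    return sum / n;
}

// NL-means weight: identical patches get weight 1; dissimilar patches decay
// exponentially, with `strength` playing the role of the filtering strength h.
double nlm_weight(double dist2, double strength) {
    return std::exp(-dist2 / (strength * strength));
}
```

The denoised pixel is then the weight-normalized average of all candidate pixels q inside the search radius around p.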

    Result of denoise:
   
before NLM denoising | after NLM denoising
    4.2 Implement a joint NL-means denoising function with normal and albedo feature buffers
        Joint NL-means extends the standard NL-means above by incorporating additional feature data.
        In my project, I use normals and albedo to guide the denoising process. The patch distances
        are now computed as a weighted sum of three components: color, normal, and albedo.

        Distance = color_weight * color_dist + normal_weight * normal_dist + albedo_weight * albedo_dist

        + Color difference: length2(Color(A) - Color(B))
        + Normal difference: 1 - cos_theta(A, B)
        + Albedo difference: length2(Albedo(A) - Albedo(B))

        The combined distance is used to compute the final weight, which helps preserve more detail
        during denoising.

    Comparison between NLM and NLM with feature buffers:
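The combined distance can be sketched directly; this is a hedged illustration where `Vec3` and `joint_distance` are hypothetical names, and normals are assumed unit length so that 1 - dot gives the 1 - cos(theta) term.

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

static double dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double length2(const Vec3 &a, const Vec3 &b) {
    Vec3 d{a.x - b.x, a.y - b.y, a.z - b.z};
    return dot(d, d);
}

// Joint NL-means distance: weighted sum of color, normal, and albedo
// differences between pixels A and B.
double joint_distance(const Vec3 &colorA, const Vec3 &colorB,
                      const Vec3 &normalA, const Vec3 &normalB,
                      const Vec3 &albedoA, const Vec3 &albedoB,
                      double color_weight, double normal_weight, double albedo_weight) {
    double color_dist  = length2(colorA, colorB);
    double normal_dist = 1.0 - dot(normalA, normalB);  // 1 - cos(theta)
    double albedo_dist = length2(albedoA, albedoB);
    return color_weight * color_dist + normal_weight * normal_dist + albedo_weight * albedo_dist;
}
```

Two pixels on differently oriented surfaces thus keep a large distance even when their noisy colors happen to match, which is what prevents edges from being averaged away.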
NLM | joint NLM
normal feature buffer | albedo feature buffer
It is hard to see an obvious improvement on the simple Jensen box geometry.
NLM | joint NLM
normal feature buffer | albedo feature buffer
For a complex geometry like the Ajax statue, however, plain NL-means blurs geometric details such as the hair and beard. By using the normal information, joint NL-means preserves these details.

    4.3 Adaptive Sampling with Pixel Variance Estimates
        Adaptive sampling dynamically adjusts the number of samples taken for each pixel based on the
        estimated variance of the rendered colors. This lets the renderer concentrate its effort on
        noisy areas. I first encountered adaptive sampling in Maya's Arnold renderer, which exposes
        *max_sample* and *threshold* parameters.

        My adaptive sampling has these key steps:
        1. Initial Sampling: each pixel is sampled a minimum number of times, during which the
           variance is estimated using Welford's online algorithm. This avoids storing the full
           sample history.
        2. Adaptive Sampling: pixels with a variance above the threshold receive additional samples
           in a second pass, until the variance falls below the threshold or max_num_sample is
           reached.
        3. Stats Output: a sampling heatmap and related data are generated to visualize the adaptive
           sampling process.

    Comparison:
        A: sampling all pixels 128 times
        B: adaptive sampling with min 16, max 128, threshold 0.005
```
# Do not use adaptive sampling
Will save rendered image to "../../scenes/final_project/jensen_box_diffuse_mis-2024-11-25-21-52-54.png"
Scene Rendering │█████████████████████████████████│ (38.785s)

# use adaptive sampling with min-16, max-256, threshold 0.01
Will save rendered image to "../../scenes/final_project/jensen_box_diffuse_mis-2024-11-25-21-55-09.png"
Scene Rendering │█████████████████████████████████│ (32.152s)
Saved sampling heatmap to: sample_heatmap.png
Adaptive sampling statistics:
  Average samples per pixel: 103.21349
  Min samples used: 16
  Max samples used: 128
```
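The variance tracking from step 1 can be sketched with Welford's update rule; this is an illustration with hypothetical names, not the actual scene.cpp code.

```cpp
#include <cassert>
#include <cmath>

// Welford's online algorithm: running mean and variance of per-pixel
// sample values without storing the sample history.
struct OnlineStats {
    long long n = 0;
    double mean = 0.0, m2 = 0.0;
    void add(double x) {
        ++n;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);  // uses the updated mean
    }
    double variance() const { return n > 1 ? m2 / (n - 1) : 0.0; }
};

// Step 2's stopping rule: keep sampling until the pixel has at least
// min_s samples, then stop once the variance estimate drops below
// `threshold` or max_s samples have been spent.
bool needs_more_samples(const OnlineStats &s, long long min_s, long long max_s,
                        double threshold) {
    if (s.n < min_s) return true;
    if (s.n >= max_s) return false;
    return s.variance() > threshold;
}
```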
no adaptive | adaptive sampling | adaptive sampling
adaptive sampling | adaptive sampling
On the heat map, red indicates pixels with higher sample counts and blue indicates lower ones. The results are nearly indistinguishable after denoising, while the total render time is reduced. We can also raise the max sample count and lower the threshold to further improve quality in noisy areas.

    4.4 Integrate Intel's Open Image Denoise
        Beyond basic and joint NL-means, Intel's Open Image Denoise library is integrated to improve
        the final quality.

        Integration: CMakeLists.txt
```
set(OIDN_DIR "PATH TO OIDN")
include_directories(${OIDN_DIR}/include)
link_directories(${OIDN_DIR}/lib)
target_include_directories(darts PRIVATE ${OIDN_DIR}/include)
target_link_libraries(darts PRIVATE
    darts_lib
    ${OIDN_DIR}/lib/OpenImageDenoise.lib
    ${OIDN_DIR}/lib/OpenImageDenoise_core.lib
)
```
        Code: src/include/sampler.h/denoiseImage_Intel()
        Intel's Open Image Denoise also supports feature buffers as input.
raw image | after using Intel's denoiser
(##) 5. Homogeneous Scattering Participating Media
    1. Implemented homogeneous absorption and scattering with the Henyey-Greenstein phase function.
    2. Implemented a volumetric path tracer (using both NEE and MIS).
    3. A medium is stored inside a material (a dielectric material containing a medium can then be
       implemented easily).
    4. I have not implemented heterogeneous participating media.

    Here is a series of test scenes compared against Blender's Principled Volume node.

    1. Density = 1 with area light
mine | ref
2. Density = 0.5 with area light
mine | ref
3. Comparison with the Volume Scatter node in Blender (by setting total = color * density and albedo = 1)
mine | ref
4. Putting a medium inside another foggy medium
mine | ref
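For reference, the Henyey-Greenstein phase function and homogeneous free-flight sampling used by such a medium can be sketched as below; this is an illustrative single-channel version, not the actual darts code.

```cpp
#include <cassert>
#include <cmath>

static const double PI = 3.14159265358979323846;

// Henyey-Greenstein phase function; cos_theta is the cosine of the angle
// between incoming and outgoing directions, g in (-1, 1) the asymmetry
// parameter (g = 0 is isotropic, g > 0 forward scattering).
double henyey_greenstein(double cos_theta, double g) {
    double denom = 1.0 + g * g - 2.0 * g * cos_theta;
    return (1.0 - g * g) / (4.0 * PI * denom * std::sqrt(denom));
}

// Free-flight distance in a homogeneous medium with extinction sigma_t,
// sampled from the exponential transmittance: t = -ln(1 - u) / sigma_t.
double sample_distance(double sigma_t, double u) {
    return -std::log(1.0 - u) / sigma_t;
}
```

Higher density (larger sigma_t) yields shorter free-flight distances, which is why the density = 1 scene looks noticeably thicker than the density = 0.5 one.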
(#) Final Image
finalimage